Show warning message when last user input gets pruned #4816
Conversation
@Jazzcort this could be a great addition, could you explore solutions that avoid injecting a new warning message type into the chat messages, along with a more subtle warning UI?
@RomneyDa I'll try to find another way to send the warning message back to the webview.
@Jazzcort I'd be interested in having this sort of "stream warning" idea as well, so feel free to bounce approaches/ideas here before spending too much time on them! I think there could be several different approaches where the warnings aren't persisted to chat history, maybe passing them with streams but with a "warning:" field that is captured in Redux.
I've already implemented this second approach in the following branch: Jazzcort/warn-when-truncate-last-msg-v2. Instead of sending the warning through the stream, I used the messenger system to deliver the warning message. With this approach, the pruning behavior occurs before the call is made. Regarding the "warning:" field, are you suggesting adding it to AssistantChatMessage? I think that could work as well! I'm open to either approach, whichever aligns better with the project's design. We can also discuss the UI implementation afterward.
Another idea would be to extend @Jazzcort's last approach and have two separate calls from webview => core, or something like that. (llm.streamChat could still prune itself when it's not being invoked from the chat view.) This would avoid having to worry about the interaction between warnings and streaming. It could also be potentially useful for some other things.
I agree with @owtaylor's suggestion. Calling llm/pruneChat before llm/streamChat not only helps manage context length but also provides control over whether llm/streamChat is called at all. Users might not focus too much on the last response when they see the warning message. Additionally, leveraging context stability by taking advantage of prefix caching is a great strategy: it can enhance the user experience by reducing response time when the context limit is reached. @RomneyDa If everything sounds good, I'll start working on it.
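To make the prune-then-stream idea concrete, here is a minimal, dependency-free sketch of what such a pruning call could return to the webview. All names, the whole-message pruning strategy, and the 4-characters-per-token estimate are assumptions for illustration, not Continue's actual API.

```typescript
// Hypothetical shape of the first of the two webview => core calls.
interface ChatMessage {
  role: string;
  content: string;
}

interface PruneResult {
  compiledMessages: ChatMessage[];
  lastMessageTruncated: boolean;
}

function pruneChat(messages: ChatMessage[], contextLength: number): PruneResult {
  const kept: ChatMessage[] = [];
  let budget = contextLength;
  // Walk from the newest message backwards, keeping whole messages that fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = Math.ceil(messages[i].content.length / 4); // rough token count
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(messages[i]);
  }
  // If even the newest message did not fit, the user's last input was pruned.
  return {
    compiledMessages: kept,
    lastMessageTruncated: messages.length > 0 && kept.length === 0,
  };
}
```

The webview could inspect `lastMessageTruncated` first, show the warning, and only then decide whether to invoke the streaming call at all.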
Planning to look into this tomorrow midday!
@Jazzcort @owtaylor I would agree that two separate calls is a good approach; having the warning up front would be great, and it won't affect all the other uses. Tagging @sestinj since this touches core streaming.
I'm planning to implement this.
Looks pretty good to me. Just two suggestions.
core/core.ts (outdated):

```typescript
if (completionOptions.model === "o1") {
  completionOptions.stream = false;
}
```
Hmm, duplicating this type of logic into two places is going to make things difficult to maintain in the future. I would suggest making model._compileChatMessages() public [and not _-prefixed] and using that. You could make it call this._modifyCompletionOptions() itself as well, since calling that multiple times shouldn't hurt.
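The reviewer's suggestion could look roughly like the following sketch; the class, method, and option names here are assumed for illustration and are not the actual Continue code.

```typescript
interface CompletionOptions {
  model: string;
  stream?: boolean;
}

class SketchLLM {
  constructor(private defaults: CompletionOptions) {}

  // Formerly `_compileChatMessages`; made public so core.ts can call it
  // directly instead of duplicating the o1 special-casing.
  compileChatMessages(overrides: Partial<CompletionOptions>): CompletionOptions {
    const options = { ...this.defaults, ...overrides };
    // Safe to call more than once: the adjustments are idempotent.
    return this.modifyCompletionOptions(options);
  }

  private modifyCompletionOptions(options: CompletionOptions): CompletionOptions {
    if (options.model === "o1") {
      options.stream = false; // o1 does not support streaming
    }
    return options;
  }
}
```

Keeping the option-munging behind one public entry point means callers outside the class never need to copy the o1 check.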
```typescript
if (lastMessageTruncated) {
  dispatch(
    setWarningMessage(
      "The context has reached its limit. This may lead to less accurate answers.",
    ),
  );
}
```
"the context has reached its limit" sounds like the entire chat is too long. Maybe:
The provided context items are too large. They have been truncated to fit within the model's context length.
It's a bit verbose, but it:
- says more specifically what happened
- contains the phrase "context length", so that if the user searches they can find out what the context length is
core/core.ts (outdated):

```typescript
const model = await this.configHandler.llmFromTitle(modelName);

options.log = undefined;
const completionOptions: CompletionOptions = mergeJson(
```
Hmm, thinking about it, it's probably better to have a public llm.compileChatMessages() that takes the LLMFullCompletionOptions, and have both that and streamChat() call _compileChatMessages(). Sorry for not suggesting that on the previous round.
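Structurally, the suggested API might look like this sketch, with both public methods delegating to one private helper so the pruning logic lives in exactly one place. The signatures, including `LLMFullCompletionOptions`, are guessed from the comment, and treating `maxTokens` as a message budget is a stand-in for real pruning.

```typescript
type Msg = { role: string; content: string };

interface LLMFullCompletionOptions {
  maxTokens?: number;
}

class Llm {
  compileChatMessages(msgs: Msg[], opts: LLMFullCompletionOptions) {
    return this._compileChatMessages(msgs, opts);
  }

  async *streamChat(msgs: Msg[], opts: LLMFullCompletionOptions) {
    // Same compilation path as compileChatMessages, so both entry points
    // agree on what was pruned.
    const { messages } = this._compileChatMessages(msgs, opts);
    for (const m of messages) {
      yield m; // placeholder for the actual model stream
    }
  }

  private _compileChatMessages(msgs: Msg[], opts: LLMFullCompletionOptions) {
    // Stand-in pruning: keep only the newest messages.
    const keep = opts.maxTokens ?? msgs.length;
    const messages = msgs.slice(-keep);
    return { messages, lastMessageTruncated: messages.length < msgs.length };
  }
}
```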
Code looks good to me. @RomneyDa - do you still think the warning needs to be more subtle? From my perspective, when the last message is truncated, the results are unlikely to be good.
Not to make this a design by committee, but my only feedback is that I think the warning should be yellow rather than red. Red feels a bit too scary for the severity of this warning, imo. Great contribution though, thanks @Jazzcort! 👌
First, I integrated llm/compileChat to precompile chat messages before invoking llm/streamChat. The llm/compileChat function returns both the compiled messages and a boolean flag indicating whether the most recent user input has been pruned, which lets us trigger a warning to notify users when pruning hits the last message. A messageOptions parameter is also introduced to streamChat as an optional argument: if messageOptions is passed with its precompiled attribute set to true, streamChat skips compiling the chat messages, ensuring they won't go through the pruning process twice.
@Patrick-Erichsen I've updated the warning message color to yellow, thanks for the feedback!
Description
If the user's last input is pruned due to context overflow, a warning message will be displayed in the chat section, alerting them that some details may have been lost. As a result, the response they receive might be incomplete or inaccurate due to the truncated input.
Granite-Code/granite-code#22
Screenshots
Testing instructions
Set the model's context length to a small value (e.g., 512) and ask a question that exceeds `contextLength - maxTokens` tokens. A warning message will appear at the bottom of the chat section, indicating that some input may have been truncated. Deleting previous messages will remove the warning.
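The trigger condition in these instructions reduces to a simple inequality, sketched below; the helper name is an illustrative assumption.

```typescript
// The prompt must fit within contextLength while leaving maxTokens of
// room for the model's output; otherwise the last input gets pruned.
function shouldWarn(
  promptTokens: number,
  contextLength: number,
  maxTokens: number,
): boolean {
  return promptTokens > contextLength - maxTokens;
}
```

With the example setup (contextLength 512), a prompt of 500 tokens against a maxTokens of 128 exceeds the 384-token budget and should surface the warning.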